Deep Learning: A Simple Example¶
Let’s get back to the Name Gender Classifier.

Prepare Data¶
import numpy as np
import nltk
from nltk.corpus import names
import random
labeled_names = ([(name, 1) for name in names.words('male.txt')] +
[(name, 0) for name in names.words('female.txt')])
random.shuffle(labeled_names)
Train-Test Split¶
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(labeled_names,
test_size=0.2,
random_state=42)
print(len(train_set), len(test_set))
6355 1589
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM, RNN, GRU
from tensorflow.keras.layers import Embedding
from tensorflow.keras.layers import SpatialDropout1D
names = [n for (n, l) in train_set]
labels = [l for (n, l) in train_set]
len(names)
6355
Tokenizer¶
By default, the token index 0 is reserved for the padding token.
If oov_token is specified, it defaults to index 1.
Specify num_words for the tokenizer to include only the top N words in the model.
The Tokenizer automatically removes punctuation.
The Tokenizer uses whitespace as the word delimiter.
To treat every character as a token, specify char_level=True.
(A small word-level sketch of the oov_token and num_words options is shown after the code below.)
tokenizer = Tokenizer(char_level=True)
tokenizer.fit_on_texts(names)
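As a quick word-level sketch of the options above (the toy names below are made up purely for illustration), we can see how oov_token and num_words behave:
## A toy word-level tokenizer (illustration only; the example texts are made up)
demo_texts = ['Mary Ann', 'Ann Lee', 'Mary Jo']
demo_tokenizer = Tokenizer(num_words=3, oov_token='<UNK>')
demo_tokenizer.fit_on_texts(demo_texts)
print(demo_tokenizer.word_index)  # '<UNK>' takes index 1; index 0 is implicitly reserved for padding
print(demo_tokenizer.texts_to_sequences(['Mary Lee']))  # words outside the top num_words fall back to the <UNK> index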
Prepare Input and Output Tensors¶
As in feature-based machine learning, a computational model only accepts numeric values. It is therefore necessary to convert the raw texts into numeric tensors for the neural network.
After creating the Tokenizer, we use it to perform text vectorization, i.e., to convert texts into tensors.
In deep learning, words or characters are automatically converted into numeric representations.
In other words, the feature engineering step is fully automatic.
Two Ways of Text Vectorization¶
Texts to Sequences: Integer encoding of tokens in texts and learn token embeddings
Texts to Matrix: One-hot encoding of texts (similar to bag-of-words model)
Method 1: Text to Sequences¶
From Texts to Sequences¶
Convert each text to a sequence of integer token indices
Pad each text to a uniform length
names_ints = tokenizer.texts_to_sequences(names)
print(names[:10])
print(names_ints[:10])
print(labels[:10])
['Ivy', 'Forrest', 'Trula', 'Vail', 'Joelynn', 'Yard', 'Krysta', 'Melamie', 'Mikhail', 'Leone']
[[3, 20, 11], [21, 7, 5, 5, 2, 9, 8], [8, 5, 16, 6, 1], [20, 1, 3, 6], [19, 7, 2, 6, 11, 4, 4], [11, 1, 5, 10], [18, 5, 11, 9, 8, 1], [12, 2, 6, 1, 12, 3, 2], [12, 3, 18, 13, 1, 3, 6], [6, 2, 7, 4, 2]]
[0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
Vocabulary¶
# determine the vocabulary size
vocab_size = len(tokenizer.word_index) + 1
print('Vocabulary Size: %d' % vocab_size)
Vocabulary Size: 30
tokenizer.word_index
{'a': 1,
'e': 2,
'i': 3,
'n': 4,
'r': 5,
'l': 6,
'o': 7,
't': 8,
's': 9,
'd': 10,
'y': 11,
'm': 12,
'h': 13,
'c': 14,
'b': 15,
'u': 16,
'g': 17,
'k': 18,
'j': 19,
'v': 20,
'f': 21,
'p': 22,
'w': 23,
'z': 24,
'q': 25,
'x': 26,
'-': 27,
' ': 28,
"'": 29}
Padding¶
When padding all texts to a uniform length, consider whether to pad or truncate at the beginning of the sequence (pre) or at the end (post).
Check the padding and truncating parameters in pad_sequences (a quick sketch of these options follows below).
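A quick sketch of these options on a toy sequence (the values are arbitrary and for illustration only):
## Toy illustration of the padding and truncating options (arbitrary values)
demo_seq = [[5, 8, 2]]
print(sequence.pad_sequences(demo_seq, maxlen=5))                    # default pre-padding: zeros added at the front
print(sequence.pad_sequences(demo_seq, maxlen=5, padding='post'))    # post-padding: zeros added at the end
print(sequence.pad_sequences(demo_seq, maxlen=2, truncating='pre'))  # pre-truncating: values dropped from the front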
names_lens = [len(n) for n in names_ints]
names_lens
import seaborn as sns
sns.displot(names_lens)
print(names[np.argmax(names_lens)]) # longest name
Helen-Elizabeth
max_len = names_lens[np.argmax(names_lens)]
max_len
15
names_ints_pad = sequence.pad_sequences(names_ints, maxlen=max_len)
names_ints_pad[:10]
array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 20, 11],
[ 0, 0, 0, 0, 0, 0, 0, 0, 21, 7, 5, 5, 2, 9, 8],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 5, 16, 6, 1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 20, 1, 3, 6],
[ 0, 0, 0, 0, 0, 0, 0, 0, 19, 7, 2, 6, 11, 4, 4],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11, 1, 5, 10],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 5, 11, 9, 8, 1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 12, 2, 6, 1, 12, 3, 2],
[ 0, 0, 0, 0, 0, 0, 0, 0, 12, 3, 18, 13, 1, 3, 6],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 2, 7, 4, 2]],
dtype=int32)
Define X and Y¶
X_train = np.array(names_ints_pad).astype('int32')
y_train = np.array(labels)
X_test = np.array(
sequence.pad_sequences(tokenizer.texts_to_sequences(
[n for (n, l) in test_set]),
maxlen=max_len)).astype('int32')
y_test = np.array([l for (n, l) in test_set])
X_test_texts = [n for (n, l) in test_set]
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
(6355, 15)
(6355,)
(1589, 15)
(1589,)
Method 2: Text to Matrix¶
One-Hot Encoding¶
Text to Matrix (to create a bag-of-words representation of each text)
Choose modes: binary, count, or tfidf
names_matrix = tokenizer.texts_to_matrix(names, mode="binary")
names[2]
'Trula'
names_matrix is in fact a bag-of-characters representation of each name.
import pandas as pd
pd.DataFrame(names_matrix[2, 1:],
columns=["ONE-HOT"],
index=list(tokenizer.word_index.keys()))
|  | ONE-HOT |
|---|---|
| a | 1.0 |
| e | 0.0 |
| i | 0.0 |
| n | 0.0 |
| r | 1.0 |
| l | 1.0 |
| o | 0.0 |
| t | 1.0 |
| s | 0.0 |
| d | 0.0 |
| y | 0.0 |
| m | 0.0 |
| h | 0.0 |
| c | 0.0 |
| b | 0.0 |
| u | 1.0 |
| g | 0.0 |
| k | 0.0 |
| j | 0.0 |
| v | 0.0 |
| f | 0.0 |
| p | 0.0 |
| w | 0.0 |
| z | 0.0 |
| q | 0.0 |
| x | 0.0 |
| - | 0.0 |
| (space) | 0.0 |
| ' | 0.0 |
Define X and Y¶
X_train2 = np.array(names_matrix).astype('int32')
y_train2 = np.array(labels)
X_test2 = tokenizer.texts_to_matrix([n for (n, l) in test_set],
mode="binary").astype('int32')
y_test2 = np.array([l for (n, l) in test_set])
X_test2_texts = [n for (n, l) in test_set]
print(X_train2.shape)
print(y_train2.shape)
print(X_test2.shape)
print(y_test2.shape)
(6355, 30)
(6355,)
(1589, 30)
(1589,)
Model Definition¶
Three important steps for building a deep neural network:
Define the model structure
Compile the model
Fit the model
After we have defined our input and output tensors (X and y), we can define the architecture of our neural network model.
For the two ways of vectorizing the names, we try two different network structures.
Text to Matrix: Fully connected Dense Layers
Text to Sequences: Embedding + RNN
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
# Plotting results
def plot1(history):
    matplotlib.rcParams['figure.dpi'] = 100
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    epochs = range(1, len(acc) + 1)
    ## Accuracy plot
    plt.plot(epochs, acc, 'bo', label='Training acc')
    plt.plot(epochs, val_acc, 'b', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.legend()
    ## Loss plot
    plt.figure()
    plt.plot(epochs, loss, 'bo', label='Training loss')
    plt.plot(epochs, val_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend()
    plt.show()
def plot2(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    #plt.gca().set_ylim(0,1)
    plt.show()
Model 1: Fully Connected Dense Layers¶
Let’s try a simple neural network with two fully connected dense layers on the Text-to-Matrix inputs.
That is, the input to this model is the bag-of-characters representation of the entire name.

Dense Layer Operation¶
Each Dense layer transforms the input tensor into a tensor whose last dimension equals the number of units (nodes) in the layer, as sketched below.

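As a minimal sketch (random inputs, shapes only; not part of the model below), a Dense layer with 16 units maps a batch of vocab_size-dimensional vectors to 16-dimensional vectors:
## Shape-only sketch of a Dense-layer transformation (random inputs, illustration only)
demo_input = tf.random.uniform((4, vocab_size))        # a batch of 4 input vectors
demo_dense = keras.layers.Dense(16, activation="relu")
print(demo_dense(demo_input).shape)                    # (4, 16): the last dimension equals the number of units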
## Define Model
from tensorflow.keras import layers
model1 = keras.Sequential()
model1.add(keras.Input(shape=(vocab_size, ), name="one_hot_input"))
model1.add(layers.Dense(16, activation="relu", name="dense_layer_1"))
model1.add(layers.Dense(16, activation="relu", name="dense_layer_2"))
model1.add(layers.Dense(1, activation="sigmoid", name="output"))
## Compile Model
model1.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model1, show_shapes=True)
A few hyperparameters for network training¶
Batch Size: The number of training examples used per update of the model parameters (one gradient-descent step)
Epochs: The number of complete passes over the training data
Validation Split Ratio: The proportion of the training data held out for validation
## Hyperparameters
BATCH_SIZE = 128
EPOCHS = 20
VALIDATION_SPLIT = 0.2
## Fit the model
history1 = model1.fit(X_train2,
y_train2,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 2s - loss: 0.6730 - accuracy: 0.5901 - val_loss: 0.6497 - val_accuracy: 0.6404
Epoch 2/20
40/40 - 0s - loss: 0.6501 - accuracy: 0.6316 - val_loss: 0.6368 - val_accuracy: 0.6444
Epoch 3/20
40/40 - 0s - loss: 0.6360 - accuracy: 0.6459 - val_loss: 0.6235 - val_accuracy: 0.6617
Epoch 4/20
40/40 - 0s - loss: 0.6212 - accuracy: 0.6631 - val_loss: 0.6105 - val_accuracy: 0.6719
Epoch 5/20
40/40 - 0s - loss: 0.6077 - accuracy: 0.6703 - val_loss: 0.6019 - val_accuracy: 0.6782
Epoch 6/20
40/40 - 0s - loss: 0.5971 - accuracy: 0.6835 - val_loss: 0.5956 - val_accuracy: 0.6790
Epoch 7/20
40/40 - 0s - loss: 0.5892 - accuracy: 0.6888 - val_loss: 0.5895 - val_accuracy: 0.6861
Epoch 8/20
40/40 - 0s - loss: 0.5828 - accuracy: 0.6916 - val_loss: 0.5871 - val_accuracy: 0.6869
Epoch 9/20
40/40 - 0s - loss: 0.5775 - accuracy: 0.7022 - val_loss: 0.5822 - val_accuracy: 0.6932
Epoch 10/20
40/40 - 0s - loss: 0.5726 - accuracy: 0.7057 - val_loss: 0.5793 - val_accuracy: 0.6939
Epoch 11/20
40/40 - 0s - loss: 0.5690 - accuracy: 0.7063 - val_loss: 0.5779 - val_accuracy: 0.6987
Epoch 12/20
40/40 - 0s - loss: 0.5656 - accuracy: 0.7071 - val_loss: 0.5762 - val_accuracy: 0.7034
Epoch 13/20
40/40 - 0s - loss: 0.5630 - accuracy: 0.7111 - val_loss: 0.5782 - val_accuracy: 0.6979
Epoch 14/20
40/40 - 0s - loss: 0.5612 - accuracy: 0.7111 - val_loss: 0.5745 - val_accuracy: 0.7042
Epoch 15/20
40/40 - 0s - loss: 0.5580 - accuracy: 0.7144 - val_loss: 0.5747 - val_accuracy: 0.6987
Epoch 16/20
40/40 - 0s - loss: 0.5565 - accuracy: 0.7166 - val_loss: 0.5720 - val_accuracy: 0.7034
Epoch 17/20
40/40 - 0s - loss: 0.5545 - accuracy: 0.7179 - val_loss: 0.5717 - val_accuracy: 0.7050
Epoch 18/20
40/40 - 0s - loss: 0.5530 - accuracy: 0.7181 - val_loss: 0.5709 - val_accuracy: 0.7065
Epoch 19/20
40/40 - 0s - loss: 0.5518 - accuracy: 0.7209 - val_loss: 0.5707 - val_accuracy: 0.7081
Epoch 20/20
40/40 - 0s - loss: 0.5498 - accuracy: 0.7260 - val_loss: 0.5686 - val_accuracy: 0.7026
plot1(history1)
model1.evaluate(X_test2, y_test2, batch_size=128, verbose=2)
13/13 - 0s - loss: 0.5687 - accuracy: 0.7042
[0.5686534643173218, 0.704216480255127]
Model 2: Embedding + RNN¶
Another possibility is to introduce an embedding layer in the network, which transforms each character in the name into a tensor (i.e., embeddings), and then to add a Recurrent Neural Network layer to process each character sequentially.
The strength of the RNN is that it iterates over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.
The idea is that after the RNN has iterated through the entire sequence, its final state retains important information about all previously seen tokens for further processing (a minimal NumPy sketch of this recurrence follows the figures below).
The input of this network is a padded sequence of the original text (name).

Embedding Layer Operation¶

RNN Layer Operation¶

RNN Layer Operation¶

Unrolled Version of RNN Operation¶

Unrolled Version of RNN Operation¶

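To make the recurrence concrete, here is a minimal NumPy sketch of an unrolled simple RNN with random weights (tanh activation; illustration only, not the layer used below):
## Unrolled simple RNN recurrence in NumPy (random weights, illustration only)
timesteps, input_dim, units = 5, 8, 4
inputs = np.random.random((timesteps, input_dim))   # one toy sequence of character embeddings
W = np.random.random((units, input_dim))            # input-to-hidden weights
U = np.random.random((units, units))                # hidden-to-hidden (recurrent) weights
b = np.random.random((units,))
state = np.zeros((units,))                          # initial hidden state
for x_t in inputs:
    state = np.tanh(np.dot(W, x_t) + np.dot(U, state) + b)  # h_t = tanh(W x_t + U h_(t-1) + b)
print(state)                                        # the final state summarizes the whole sequence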
## Define the embedding dimension
EMBEDDING_DIM = 128
## Define model
model2 = Sequential()
model2.add(
Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_DIM,
input_length=max_len,
mask_zero=True))
model2.add(layers.SimpleRNN(16, activation="relu", name="RNN_layer"))
model2.add(Dense(16, activation="relu", name="dense_layer"))
model2.add(Dense(1, activation="sigmoid", name="output"))
model2.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model2, show_shapes=True)
history2 = model2.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 2s - loss: 0.6301 - accuracy: 0.6849 - val_loss: 0.5575 - val_accuracy: 0.7278
Epoch 2/20
40/40 - 0s - loss: 0.4917 - accuracy: 0.7537 - val_loss: 0.4768 - val_accuracy: 0.7569
Epoch 3/20
40/40 - 0s - loss: 0.4363 - accuracy: 0.7825 - val_loss: 0.4526 - val_accuracy: 0.7632
Epoch 4/20
40/40 - 0s - loss: 0.4224 - accuracy: 0.7925 - val_loss: 0.4414 - val_accuracy: 0.7718
Epoch 5/20
40/40 - 0s - loss: 0.4117 - accuracy: 0.8021 - val_loss: 0.4418 - val_accuracy: 0.7789
Epoch 6/20
40/40 - 0s - loss: 0.4079 - accuracy: 0.8017 - val_loss: 0.4385 - val_accuracy: 0.7821
Epoch 7/20
40/40 - 0s - loss: 0.4036 - accuracy: 0.8084 - val_loss: 0.4351 - val_accuracy: 0.7836
Epoch 8/20
40/40 - 0s - loss: 0.3943 - accuracy: 0.8110 - val_loss: 0.4372 - val_accuracy: 0.7836
Epoch 9/20
40/40 - 0s - loss: 0.3917 - accuracy: 0.8104 - val_loss: 0.4356 - val_accuracy: 0.7844
Epoch 10/20
40/40 - 0s - loss: 0.3897 - accuracy: 0.8092 - val_loss: 0.4311 - val_accuracy: 0.7876
Epoch 11/20
40/40 - 0s - loss: 0.3837 - accuracy: 0.8157 - val_loss: 0.4312 - val_accuracy: 0.7891
Epoch 12/20
40/40 - 0s - loss: 0.3817 - accuracy: 0.8159 - val_loss: 0.4314 - val_accuracy: 0.7907
Epoch 13/20
40/40 - 0s - loss: 0.3799 - accuracy: 0.8188 - val_loss: 0.4298 - val_accuracy: 0.7884
Epoch 14/20
40/40 - 0s - loss: 0.3750 - accuracy: 0.8220 - val_loss: 0.4326 - val_accuracy: 0.7884
Epoch 15/20
40/40 - 0s - loss: 0.3754 - accuracy: 0.8247 - val_loss: 0.4327 - val_accuracy: 0.7891
Epoch 16/20
40/40 - 0s - loss: 0.3703 - accuracy: 0.8265 - val_loss: 0.4316 - val_accuracy: 0.7946
Epoch 17/20
40/40 - 0s - loss: 0.3710 - accuracy: 0.8218 - val_loss: 0.4326 - val_accuracy: 0.7891
Epoch 18/20
40/40 - 0s - loss: 0.3677 - accuracy: 0.8257 - val_loss: 0.4318 - val_accuracy: 0.7891
Epoch 19/20
40/40 - 0s - loss: 0.3651 - accuracy: 0.8308 - val_loss: 0.4301 - val_accuracy: 0.7931
Epoch 20/20
40/40 - 0s - loss: 0.3615 - accuracy: 0.8293 - val_loss: 0.4293 - val_accuracy: 0.7946
plot1(history2)
model2.evaluate(X_test, y_test, batch_size=128, verbose=2)
13/13 - 0s - loss: 0.4122 - accuracy: 0.8118
[0.4122048318386078, 0.8118313550949097]
Model 3: Regularization and Dropout¶
Based on the validation results of the previous two models, they are probably overfitting: performance on the validation set starts to stall after the first few epochs.
We can add regularization and dropout to our network definition to mitigate overfitting (a small sketch of weight regularization follows below).
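For instance, weight regularization can be attached to a layer through the kernel_regularizer argument; this is only a sketch of the option and is not used in model3 below, which relies on dropout instead:
## Sketch of L2 weight regularization on a Dense layer (not used in model3 below)
from tensorflow.keras import regularizers
regularized_dense = Dense(16,
                          activation="relu",
                          kernel_regularizer=regularizers.l2(0.001))  # penalize large weights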
## Define embedding dimension
EMBEDDING_DIM = 128
## Define model
model3 = Sequential()
model3.add(
Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_DIM,
input_length=max_len,
mask_zero=True))
model3.add(
layers.SimpleRNN(16,
activation="relu",
name="RNN_layer",
dropout=0.2,
recurrent_dropout=0.2)) ## add dropout
model3.add(Dense(16, activation="relu", name="dense_layer"))
model3.add(Dense(1, activation="sigmoid", name="output"))
model3.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model3)
history3 = model3.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 2s - loss: 0.6677 - accuracy: 0.6528 - val_loss: 0.6221 - val_accuracy: 0.7120
Epoch 2/20
40/40 - 0s - loss: 0.5499 - accuracy: 0.7295 - val_loss: 0.4968 - val_accuracy: 0.7679
Epoch 3/20
40/40 - 0s - loss: 0.4652 - accuracy: 0.7665 - val_loss: 0.4655 - val_accuracy: 0.7687
Epoch 4/20
40/40 - 0s - loss: 0.4496 - accuracy: 0.7683 - val_loss: 0.4566 - val_accuracy: 0.7703
Epoch 5/20
40/40 - 0s - loss: 0.4411 - accuracy: 0.7799 - val_loss: 0.4525 - val_accuracy: 0.7758
Epoch 6/20
40/40 - 0s - loss: 0.4365 - accuracy: 0.7860 - val_loss: 0.4501 - val_accuracy: 0.7710
Epoch 7/20
40/40 - 0s - loss: 0.4288 - accuracy: 0.7893 - val_loss: 0.4439 - val_accuracy: 0.7781
Epoch 8/20
40/40 - 0s - loss: 0.4273 - accuracy: 0.7954 - val_loss: 0.4446 - val_accuracy: 0.7718
Epoch 9/20
40/40 - 0s - loss: 0.4279 - accuracy: 0.7862 - val_loss: 0.4394 - val_accuracy: 0.7781
Epoch 10/20
40/40 - 0s - loss: 0.4214 - accuracy: 0.7941 - val_loss: 0.4377 - val_accuracy: 0.7742
Epoch 11/20
40/40 - 0s - loss: 0.4241 - accuracy: 0.7927 - val_loss: 0.4349 - val_accuracy: 0.7805
Epoch 12/20
40/40 - 0s - loss: 0.4153 - accuracy: 0.7950 - val_loss: 0.4358 - val_accuracy: 0.7821
Epoch 13/20
40/40 - 0s - loss: 0.4167 - accuracy: 0.7925 - val_loss: 0.4415 - val_accuracy: 0.7773
Epoch 14/20
40/40 - 0s - loss: 0.4138 - accuracy: 0.8033 - val_loss: 0.4335 - val_accuracy: 0.7797
Epoch 15/20
40/40 - 0s - loss: 0.4089 - accuracy: 0.7980 - val_loss: 0.4409 - val_accuracy: 0.7828
Epoch 16/20
40/40 - 0s - loss: 0.4138 - accuracy: 0.7958 - val_loss: 0.4355 - val_accuracy: 0.7797
Epoch 17/20
40/40 - 0s - loss: 0.4130 - accuracy: 0.7992 - val_loss: 0.4330 - val_accuracy: 0.7797
Epoch 18/20
40/40 - 0s - loss: 0.4106 - accuracy: 0.7966 - val_loss: 0.4332 - val_accuracy: 0.7836
Epoch 19/20
40/40 - 0s - loss: 0.4062 - accuracy: 0.7996 - val_loss: 0.4293 - val_accuracy: 0.7821
Epoch 20/20
40/40 - 0s - loss: 0.4140 - accuracy: 0.7945 - val_loss: 0.4298 - val_accuracy: 0.7734
plot1(history3)
model3.evaluate(X_test, y_test, batch_size=128, verbose=2)
13/13 - 0s - loss: 0.4066 - accuracy: 0.8175
[0.40660879015922546, 0.8174952864646912]
Model 4: Improve the Models¶
In addition to regularization and dropouts, we can further improve the model by increasing the model complexity.
In particular, we can increase the depths and widths of the network layers.
Let’s try stacking two RNN layers.
Tip
When we stack two sequence layers (e.g., RNN), we need to make sure that the hidden states (outputs) of the first sequence layer at all timesteps are properly passed onto the next sequence layer, not just the hidden state (output) of the last timestep.
In keras, this usually means that we need to set the argument return_sequences=True in a sequence layer (e.g., SimpleRNN, LSTM, GRU etc).
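A quick shape-only sketch of this difference (random inputs, illustration only):
## Effect of return_sequences on the output shape (random inputs, illustration only)
demo_emb = tf.random.uniform((1, max_len, EMBEDDING_DIM))           # (batch, timesteps, features)
print(layers.SimpleRNN(16, return_sequences=True)(demo_emb).shape)  # (1, 15, 16): one hidden state per timestep
print(layers.SimpleRNN(16)(demo_emb).shape)                         # (1, 16): only the last hidden state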
## Define embedding dimension
EMBEDDING_DIM = 128
## Define model
model4 = Sequential()
model4.add(
Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_DIM,
input_length=max_len,
mask_zero=True))
model4.add(
layers.SimpleRNN(16,
activation="relu",
name="RNN_layer_1",
dropout=0.2,
recurrent_dropout=0.5,
return_sequences=True)) ## To ensure the hidden states of all timesteps are passed on to the next layer
model4.add(
layers.SimpleRNN(16,
activation="relu",
name="RNN_layer_2",
dropout=0.2,
recurrent_dropout=0.5))
model4.add(Dense(1, activation="sigmoid", name="output"))
## Compile model
model4.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model4)
history4 = model4.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 3s - loss: 0.6527 - accuracy: 0.6223 - val_loss: 0.6090 - val_accuracy: 0.6404
Epoch 2/20
40/40 - 0s - loss: 0.6050 - accuracy: 0.6277 - val_loss: 0.5654 - val_accuracy: 0.6491
Epoch 3/20
40/40 - 0s - loss: 0.5609 - accuracy: 0.6528 - val_loss: 0.5366 - val_accuracy: 0.7207
Epoch 4/20
40/40 - 0s - loss: 0.5317 - accuracy: 0.7061 - val_loss: 0.5238 - val_accuracy: 0.7168
Epoch 5/20
40/40 - 0s - loss: 0.5161 - accuracy: 0.7238 - val_loss: 0.5155 - val_accuracy: 0.7254
Epoch 6/20
40/40 - 0s - loss: 0.5157 - accuracy: 0.7352 - val_loss: 0.5135 - val_accuracy: 0.7223
Epoch 7/20
40/40 - 0s - loss: 0.5085 - accuracy: 0.7343 - val_loss: 0.5121 - val_accuracy: 0.7215
Epoch 8/20
40/40 - 0s - loss: 0.5004 - accuracy: 0.7500 - val_loss: 0.5101 - val_accuracy: 0.7238
Epoch 9/20
40/40 - 0s - loss: 0.4994 - accuracy: 0.7470 - val_loss: 0.5058 - val_accuracy: 0.7262
Epoch 10/20
40/40 - 0s - loss: 0.4930 - accuracy: 0.7563 - val_loss: 0.5032 - val_accuracy: 0.7325
Epoch 11/20
40/40 - 0s - loss: 0.4909 - accuracy: 0.7571 - val_loss: 0.4972 - val_accuracy: 0.7301
Epoch 12/20
40/40 - 0s - loss: 0.4846 - accuracy: 0.7573 - val_loss: 0.4938 - val_accuracy: 0.7317
Epoch 13/20
40/40 - 0s - loss: 0.4846 - accuracy: 0.7526 - val_loss: 0.4873 - val_accuracy: 0.7349
Epoch 14/20
40/40 - 0s - loss: 0.4815 - accuracy: 0.7508 - val_loss: 0.4841 - val_accuracy: 0.7317
Epoch 15/20
40/40 - 0s - loss: 0.4754 - accuracy: 0.7624 - val_loss: 0.4791 - val_accuracy: 0.7498
Epoch 16/20
40/40 - 0s - loss: 0.4648 - accuracy: 0.7689 - val_loss: 0.4749 - val_accuracy: 0.7522
Epoch 17/20
40/40 - 0s - loss: 0.4627 - accuracy: 0.7693 - val_loss: 0.4699 - val_accuracy: 0.7490
Epoch 18/20
40/40 - 0s - loss: 0.4682 - accuracy: 0.7649 - val_loss: 0.4673 - val_accuracy: 0.7553
Epoch 19/20
40/40 - 0s - loss: 0.4658 - accuracy: 0.7675 - val_loss: 0.4672 - val_accuracy: 0.7498
Epoch 20/20
40/40 - 0s - loss: 0.4656 - accuracy: 0.7655 - val_loss: 0.4666 - val_accuracy: 0.7514
plot1(history4)
model4.evaluate(X_test, y_test, batch_size=128, verbose=2)
13/13 - 0s - loss: 0.4327 - accuracy: 0.7936
[0.43272581696510315, 0.7935808897018433]
Model 5: Bidirectional¶
We can also increase the model complexity in several ways:
Use more advanced RNNs, such as LSTM or GRU
Process the sequence in two directions
Increase the hidden nodes of the RNN/LSTM
Now let’s try a more sophisticated recurrent layer, the LSTM, with bidirectional sequence processing and more hidden nodes in each LSTM layer.
## Define embedding dimension
EMBEDDING_DIM = 128
## Define model
model5 = Sequential()
model5.add(
Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_DIM,
input_length=max_len,
mask_zero=True))
model5.add(
layers.Bidirectional( ## Bidirectional sequence processing
LSTM(32,
activation="relu",
name="lstm_layer_1",
dropout=0.2,
recurrent_dropout=0.5,
return_sequences=True)))
model5.add(
layers.Bidirectional( ## Bidirectional sequence processing
LSTM(32,
activation="relu",
name="lstm_layer_2",
dropout=0.2,
recurrent_dropout=0.5)))
model5.add(Dense(1, activation="sigmoid", name="output"))
model5.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model5)
history5 = model5.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 13s - loss: 0.6574 - accuracy: 0.6255 - val_loss: 0.6185 - val_accuracy: 0.6404
Epoch 2/20
40/40 - 2s - loss: 0.5829 - accuracy: 0.6629 - val_loss: 0.5311 - val_accuracy: 0.7301
Epoch 3/20
40/40 - 2s - loss: 0.5027 - accuracy: 0.7425 - val_loss: 0.4978 - val_accuracy: 0.7435
Epoch 4/20
40/40 - 2s - loss: 0.4677 - accuracy: 0.7693 - val_loss: 0.4870 - val_accuracy: 0.7655
Epoch 5/20
40/40 - 2s - loss: 0.4554 - accuracy: 0.7775 - val_loss: 0.4625 - val_accuracy: 0.7766
Epoch 6/20
40/40 - 2s - loss: 0.4391 - accuracy: 0.7901 - val_loss: 0.4546 - val_accuracy: 0.7805
Epoch 7/20
40/40 - 2s - loss: 0.4228 - accuracy: 0.7982 - val_loss: 0.4561 - val_accuracy: 0.7703
Epoch 8/20
40/40 - 2s - loss: 0.4253 - accuracy: 0.7988 - val_loss: 0.4408 - val_accuracy: 0.7844
Epoch 9/20
40/40 - 2s - loss: 0.4158 - accuracy: 0.8019 - val_loss: 0.4379 - val_accuracy: 0.7876
Epoch 10/20
40/40 - 2s - loss: 0.4162 - accuracy: 0.8049 - val_loss: 0.4329 - val_accuracy: 0.7899
Epoch 11/20
40/40 - 2s - loss: 0.4085 - accuracy: 0.8080 - val_loss: 0.4329 - val_accuracy: 0.7915
Epoch 12/20
40/40 - 2s - loss: 0.4051 - accuracy: 0.8070 - val_loss: 0.4294 - val_accuracy: 0.7923
Epoch 13/20
40/40 - 2s - loss: 0.4065 - accuracy: 0.8070 - val_loss: 0.4290 - val_accuracy: 0.7891
Epoch 14/20
40/40 - 2s - loss: 0.4014 - accuracy: 0.8084 - val_loss: 0.4282 - val_accuracy: 0.7876
Epoch 15/20
40/40 - 2s - loss: 0.3971 - accuracy: 0.8070 - val_loss: 0.4249 - val_accuracy: 0.7986
Epoch 16/20
40/40 - 2s - loss: 0.3986 - accuracy: 0.8096 - val_loss: 0.4215 - val_accuracy: 0.7962
Epoch 17/20
40/40 - 2s - loss: 0.3950 - accuracy: 0.8120 - val_loss: 0.4184 - val_accuracy: 0.7986
Epoch 18/20
40/40 - 2s - loss: 0.3891 - accuracy: 0.8106 - val_loss: 0.4202 - val_accuracy: 0.7978
Epoch 19/20
40/40 - 2s - loss: 0.3918 - accuracy: 0.8090 - val_loss: 0.4246 - val_accuracy: 0.7970
Epoch 20/20
40/40 - 2s - loss: 0.3850 - accuracy: 0.8116 - val_loss: 0.4234 - val_accuracy: 0.7946
plot1(history5)
model5.evaluate(X_test, y_test, batch_size=128, verbose=2)
13/13 - 0s - loss: 0.4050 - accuracy: 0.8125
[0.40498316287994385, 0.8124606609344482]
Check Embeddings¶
Compared to one-hot encodings of characters, embeddings may include more information relating to the characteristics (semantics?) of the characters.
We can extract the embedding layer and apply a dimensionality-reduction technique (e.g., t-SNE) to see how the embeddings capture the relationships between characters.
## A name in sequence from test set
X_test[10]
array([ 0, 0, 0, 0, 0, 0, 0, 0, 9, 13, 2, 22, 1, 5, 10],
dtype=int32)
ind2char = tokenizer.index_word
[ind2char.get(i) for i in X_test[10] if ind2char.get(i) != None]
['s', 'h', 'e', 'p', 'a', 'r', 'd']
tokenizer.texts_to_sequences('Alvin')
[[1], [6], [20], [3], [4]]
char_vectors = model5.layers[0].get_weights()[0]
char_vectors.shape
(30, 128)
labels = [char for (ind, char) in tokenizer.index_word.items()]
labels.insert(0, None)
labels
[None,
'a',
'e',
'i',
'n',
'r',
'l',
'o',
't',
's',
'd',
'y',
'm',
'h',
'c',
'b',
'u',
'g',
'k',
'j',
'v',
'f',
'p',
'w',
'z',
'q',
'x',
'-',
' ',
"'"]
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0, n_iter=5000, perplexity=3)
np.set_printoptions(suppress=True)
T = tsne.fit_transform(char_vectors)
labels = labels
plt.figure(figsize=(10, 7), dpi=150)
plt.scatter(T[:, 0], T[:, 1], c='orange', edgecolors='r')
for label, x, y in zip(labels, T[:, 0], T[:, 1]):
    plt.annotate(label,
                 xy=(x + 1, y + 1),
                 xytext=(0, 0),
                 textcoords='offset points')
Issues of Word/Character Representations¶
One-hot encoding does not indicate semantic relationships between characters.
For deep learning NLP, it is preferred to convert one-hot encodings of words/characters into embeddings, which are argued to include more semantic information of the tokens.
Now the question is how to train and create better word embeddings. We will come back to this issue later.
Hyperparameter Tuning¶
Note
Please install the keras-tuner module in your current conda environment:
pip install -U keras-tuner
or
conda install -c conda-forge keras-tuner
Like feature-based ML methods, neural networks come with many hyperparameters, whose values often need to be tuned rather than left at their defaults.
Typical hyperparameters include:
Number of nodes per layer
Learning rate
We can utilize the keras-tuner module to fine-tune the hyperparameters (i.e., to find the values that optimize model performance).
Steps for Keras Tuner
First, wrap the model definition in a function that takes a single hp argument. Inside this function, replace any value we want to tune with a call to a hyperparameter sampling method, e.g., hp.Int() or hp.Choice(). The function should return a compiled model.
Next, instantiate a tuner object, specifying our optimization objective and other search parameters.
Finally, start the search with the search() method, which takes the same arguments as Model.fit() in keras.
When the search is over, we can retrieve the best model and a summary of the results from the tuner.
## confirm if the right kernel is being used
import sys
sys.executable
'/Users/Alvin/opt/anaconda3/envs/python-notes/bin/python'
import kerastuner
## Wrap model definition in a function
## and specify the parameters needed for tuning
# def build_model(hp):
# model1 = keras.Sequential()
# model1.add(keras.Input(shape=(max_len,)))
# model1.add(layers.Dense(hp.Int('units', min_value=32, max_value=128, step=32), activation="relu", name="dense_layer_1"))
# model1.add(layers.Dense(hp.Int('units', min_value=32, max_value=128, step=32), activation="relu", name="dense_layer_2"))
# model1.add(layers.Dense(2, activation="softmax", name="output"))
# model1.compile(
# optimizer=keras.optimizers.Adam(
# hp.Choice('learning_rate',
# values=[1e-2, 1e-3, 1e-4])),
# loss='sparse_categorical_crossentropy',
# metrics=['accuracy'])
# return model1
def build_model(hp):
    m = Sequential()
    m.add(
        Embedding(input_dim=vocab_size,
                  output_dim=hp.Int('output_dim',  ## tuning 2
                                    min_value=32,
                                    max_value=128,
                                    step=32),
                  input_length=max_len,
                  mask_zero=True))
    m.add(
        layers.Bidirectional(
            LSTM(hp.Int('units', min_value=16, max_value=64, step=16),  ## tuning 1
                 activation="relu",
                 dropout=0.2,
                 recurrent_dropout=0.2)))
    m.add(Dense(1, activation="sigmoid", name="output"))
    m.compile(loss=keras.losses.BinaryCrossentropy(),
              optimizer=keras.optimizers.Adam(learning_rate=0.001),
              metrics=["accuracy"])
    return m
## This is to clean up the temp dir created by the tuner.
## Every time we re-start the tuner, it's better to keep the temp dir clean.
import os
import shutil
if os.path.isdir('my_dir'):
    shutil.rmtree('my_dir')
The max_trials argument sets the maximum number of hyperparameter combinations (trials) to try.
The executions_per_trial argument is the number of models that should be built and fit for each trial, for robustness purposes.
## Instantiate the tuner
tuner = kerastuner.tuners.RandomSearch(build_model,
objective='val_accuracy',
max_trials=10,
executions_per_trial=2,
directory='my_dir')
## Check the tuner's search space
tuner.search_space_summary()
Search space summary
output_dim (Int)
units (Int)
%%time
## Start tuning with the tuner
tuner.search(X_train, y_train, validation_split=0.2, batch_size=128)
(Search output truncated: the tuner ran a number of trials, each fitting two models for one epoch, and every run ended with a validation accuracy of roughly 0.64. The hyperparameter values reported in each trial summary were not preserved in the captured output.)
INFO:tensorflow:Oracle triggered exit
CPU times: user 1min 54s, sys: 2.85 s, total: 1min 57s
Wall time: 1min 32s
## Retrieve the best models from the tuner
models = tuner.get_best_models(num_models=2)
plot_model(models[0], show_shapes=True)
## Retrieve the summary of results from the tuner
tuner.results_summary()
Results summary
(Six trial summaries were reported; the hyperparameter values for each trial were not preserved in the captured output.)
Explanation¶
Train Model with the Tuned Hyperparameters¶
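The tuned values can also be retrieved programmatically from the tuner (a minimal sketch; the hyperparameter names follow the build_model function above):
## Retrieve the best hyperparameter values found by the tuner (sketch)
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.get('output_dim'), best_hp.get('units'))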
EMBEDDING_DIM = 64
model6 = Sequential()
model6.add(
Embedding(input_dim=vocab_size,
output_dim=EMBEDDING_DIM,
input_length=max_len,
mask_zero=True))
model6.add(
layers.Bidirectional(
LSTM(64,
activation="relu",
name="lstm_layer",
dropout=0.2,
recurrent_dropout=0.5)))
model6.add(Dense(1, activation="sigmoid", name="output"))
model6.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
plot_model(model6)
history6 = model6.fit(X_train,
y_train,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
verbose=2,
validation_split=VALIDATION_SPLIT)
Epoch 1/20
40/40 - 8s - loss: 0.6570 - accuracy: 0.6261 - val_loss: 0.6158 - val_accuracy: 0.6404
Epoch 2/20
40/40 - 1s - loss: 0.5823 - accuracy: 0.6725 - val_loss: 0.5214 - val_accuracy: 0.7309
Epoch 3/20
40/40 - 1s - loss: 0.4951 - accuracy: 0.7610 - val_loss: 0.4929 - val_accuracy: 0.7640
Epoch 4/20
40/40 - 1s - loss: 0.4545 - accuracy: 0.7777 - val_loss: 0.4649 - val_accuracy: 0.7616
Epoch 5/20
40/40 - 1s - loss: 0.4419 - accuracy: 0.7856 - val_loss: 0.4566 - val_accuracy: 0.7703
Epoch 6/20
40/40 - 2s - loss: 0.4346 - accuracy: 0.7886 - val_loss: 0.4477 - val_accuracy: 0.7805
Epoch 7/20
40/40 - 1s - loss: 0.4276 - accuracy: 0.7945 - val_loss: 0.4450 - val_accuracy: 0.7821
Epoch 8/20
40/40 - 1s - loss: 0.4223 - accuracy: 0.7948 - val_loss: 0.4409 - val_accuracy: 0.7860
Epoch 9/20
40/40 - 1s - loss: 0.4196 - accuracy: 0.7988 - val_loss: 0.4349 - val_accuracy: 0.7844
Epoch 10/20
40/40 - 2s - loss: 0.4132 - accuracy: 0.8015 - val_loss: 0.4356 - val_accuracy: 0.7821
Epoch 11/20
40/40 - 2s - loss: 0.4099 - accuracy: 0.8035 - val_loss: 0.4314 - val_accuracy: 0.7907
Epoch 12/20
40/40 - 2s - loss: 0.4126 - accuracy: 0.8043 - val_loss: 0.4247 - val_accuracy: 0.7923
Epoch 13/20
40/40 - 1s - loss: 0.4032 - accuracy: 0.8082 - val_loss: 0.4231 - val_accuracy: 0.7954
Epoch 14/20
40/40 - 1s - loss: 0.4011 - accuracy: 0.8078 - val_loss: 0.4208 - val_accuracy: 0.7939
Epoch 15/20
40/40 - 2s - loss: 0.4008 - accuracy: 0.8080 - val_loss: 0.4181 - val_accuracy: 0.7970
Epoch 16/20
40/40 - 1s - loss: 0.3980 - accuracy: 0.8080 - val_loss: 0.4150 - val_accuracy: 0.7994
Epoch 17/20
40/40 - 1s - loss: 0.3968 - accuracy: 0.8143 - val_loss: 0.4206 - val_accuracy: 0.8041
Epoch 18/20
40/40 - 1s - loss: 0.3935 - accuracy: 0.8051 - val_loss: 0.4145 - val_accuracy: 0.8002
Epoch 19/20
40/40 - 1s - loss: 0.3957 - accuracy: 0.8122 - val_loss: 0.4144 - val_accuracy: 0.8049
Epoch 20/20
40/40 - 2s - loss: 0.3942 - accuracy: 0.8106 - val_loss: 0.4107 - val_accuracy: 0.8072
plot2(history6)
Finally, we use LIME to explain which characters contribute to an individual gender prediction; since the model works on characters, we set char_level=True.
from lime.lime_text import LimeTextExplainer
explainer = LimeTextExplainer(class_names=['male'], char_level=True)
def model_predict_pipeline(text):
    _seq = tokenizer.texts_to_sequences(text)
    _seq_pad = keras.preprocessing.sequence.pad_sequences(_seq, maxlen=max_len)
    #return np.array([[float(1-x), float(x)] for x in model.predict(np.array(_seq_pad))])
    return model6.predict(np.array(_seq_pad))
reversed_word_index = dict([(index, word)
for (word, index) in tokenizer.word_index.items()])
text_id = 305
X_test[text_id]
array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 3, 12, 12, 7],
dtype=int32)
X_test_texts[text_id]
'Kimmo'
' '.join([reversed_word_index.get(i, '?') for i in X_test[text_id]])
'? ? ? ? ? ? ? ? ? ? k i m m o'
model_predict_pipeline([X_test_texts[text_id]])
array([[0.80037665]], dtype=float32)
exp = explainer.explain_instance(X_test_texts[text_id],
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
y_test[text_id]
1
exp = explainer.explain_instance('Tim',
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
exp = explainer.explain_instance('Michaelis',
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
exp = explainer.explain_instance('Sidney',
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
exp = explainer.explain_instance('Timber',
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
exp = explainer.explain_instance('Alvin',
model_predict_pipeline,
num_features=100,
top_labels=1)
exp.show_in_notebook(text=True)
References¶
Chollet, F. (2017). Deep Learning with Python. Manning. (Chapters 3 and 4)